Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks
A cascade of fully convolutional neural networks is proposed to segment
multi-modal Magnetic Resonance (MR) images of brain tumors into background and
three hierarchical regions: whole tumor, tumor core, and enhancing tumor core.
The cascade is designed to decompose the multi-class segmentation problem into
a sequence of three binary segmentation problems according to the subregion
hierarchy. The whole tumor is segmented in the first step and the bounding box
of the result is used for the tumor core segmentation in the second step. The
enhancing tumor core is then segmented based on the bounding box of the tumor
core segmentation result. Our networks consist of multiple layers of
anisotropic and dilated convolution filters, and they are combined with
multi-view fusion to reduce false positives. Residual connections and
multi-scale predictions are employed in these networks to boost the
segmentation performance. Experiments with BraTS 2017 validation set show that
the proposed method achieved average Dice scores of 0.7859, 0.9050, and 0.8378 for
enhancing tumor core, whole tumor and tumor core, respectively. The
corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and
0.7748, respectively.
Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 201
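The cascade of three binary segmentations can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `wnet`, `tnet`, and `enet` are hypothetical callables standing in for the three trained networks, and the bounding-box margin is an illustrative assumption.

```python
import numpy as np

def bounding_box(mask, margin=5):
    """Axis-aligned bounding box of a binary mask, expanded by a margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascaded_segmentation(image, wnet, tnet, enet):
    """Decompose multi-class segmentation into three binary steps,
    each restricted to the bounding box of the previous result."""
    whole = wnet(image) > 0.5                 # step 1: whole tumor
    box1 = bounding_box(whole)
    core = np.zeros_like(whole)
    core[box1] = tnet(image[box1]) > 0.5      # step 2: tumor core inside box1
    box2 = bounding_box(core)
    enh = np.zeros_like(whole)
    enh[box2] = enet(image[box2]) > 0.5       # step 3: enhancing core inside box2
    return whole, core, enh
```

By construction each predicted region is confined to the bounding box of its parent region, mirroring the subregion hierarchy.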
Semi-supervised Pathological Image Segmentation via Cross Distillation of Multiple Attentions
Segmentation of pathological images is a crucial step for accurate cancer
diagnosis. However, acquiring dense annotations of such images for training is
labor-intensive and time-consuming. To address this issue, Semi-Supervised
Learning (SSL) has the potential to reduce the annotation cost, but effectively
leveraging a large number of unlabeled training images remains challenging. In this paper, we
propose a novel SSL method based on Cross Distillation of Multiple Attentions
(CDMA) to effectively leverage unlabeled images. Firstly, we propose a
Multi-attention Tri-branch Network (MTNet) that consists of an encoder and a
three-branch decoder, with each branch using a different attention mechanism
that calibrates features in different aspects to generate diverse outputs.
Secondly, we introduce Cross Decoder Knowledge Distillation (CDKD) between the
three decoder branches, allowing them to learn from each other's soft labels to
mitigate the negative impact of incorrect pseudo labels in training.
Additionally, uncertainty minimization is applied to the average prediction of
the three branches, which further regularizes predictions on unlabeled images
and encourages inter-branch consistency. Our proposed CDMA was compared with
eight state-of-the-art SSL methods on the public DigestPath dataset, and the
experimental results showed that our method outperforms the other approaches
under different annotation ratios. The code is available at
https://github.com/HiLab-git/CDMA.
Comment: Provisionally accepted by MICCAI 202
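The cross-distillation and uncertainty-minimization terms described above can be sketched as follows. This is a simplified numpy illustration under assumed choices (temperature-softened softmax, pairwise KL between branches, entropy of the mean prediction); the exact loss weighting and details are in the released code.

```python
import numpy as np

def softmax(z, t=1.0, axis=-1):
    """Temperature-softened softmax over the class axis."""
    z = z / t
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-8):
    """Mean KL(p || q) over pixels."""
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def cdma_unsup_loss(logits_a, logits_b, logits_c, t=2.0):
    """Pairwise cross-decoder distillation on soft labels, plus
    entropy minimization on the average of the three predictions."""
    probs = [softmax(l, t) for l in (logits_a, logits_b, logits_c)]
    distill = 0.0
    for i in range(3):
        for j in range(3):
            if i != j:
                distill += kl(probs[i], probs[j])
    distill /= 6.0
    mean_p = sum(softmax(l) for l in (logits_a, logits_b, logits_c)) / 3.0
    entropy = -np.mean(np.sum(mean_p * np.log(mean_p + 1e-8), axis=-1))
    return distill + entropy
```

When the three branches agree, the distillation term vanishes and only the entropy of the shared prediction is penalized, which is what drives inter-branch consistency on unlabeled images.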
Conjugate Calculation of Gas Turbine Vanes Cooled with Leading Edge Films
Conjugate calculation methodology is used to simulate the C3X gas turbine vanes cooled with leading-edge films of the "shower-head" type. Comparing the calculated results of different turbulence models with the measured data shows that calculation with the transition model better simulates the flow and heat transfer in boundary layers with leading-edge film cooling. In the laminar boundary layers on the upstream suction side, the film cooling flow presents 3D turbulent characteristics before transition; these quickly disappear on the downstream suction side owing to intensified mixing with the hot-gas boundary layer after transition. On the pressure side, the film cooling flow retains its 3D turbulent characteristics throughout, because the locally laminar boundary layers allow a smooth mixing of the cooling flow and the hot gas. The temperature gradients between the cooled metallic vane and the hot gas can improve the stability of the boundary-layer flow because they possess a self-stable convective structure.
Magnetic anisotropy in hole-doped superconducting Ba0.67K0.33Fe2As2 probed by polarized inelastic neutron scattering
We use polarized inelastic neutron scattering (INS) to study spin excitations
of the optimally hole-doped superconductor Ba0.67K0.33Fe2As2
(Tc ≈ … K).
In the normal state, the imaginary part of the dynamic susceptibility,
χ″(Q, ω), shows magnetic anisotropy for energies below
7 meV, with c-axis polarized spin excitations larger than the
in-plane component. Upon entering the superconducting state, previous
unpolarized INS experiments have shown that spin gaps of 5 and 0.75 meV
open at wave vectors Q = … and Q = …, respectively, with a
broad neutron spin resonance at E ≈ … meV. Our neutron polarization analysis
reveals that the large difference in spin gaps is purely due to different spin
gaps in the c-axis and in-plane polarized spin excitations, resulting in a resonance
with different energy widths for the c-axis and in-plane spin excitations. The
observation of spin anisotropy in both optimally electron- and hole-doped
BaFe2As2 is due to their proximity to the AF-ordered parent compound BaFe2As2, where
spin anisotropy exists below TN.
Comment: 5 pages, 4 figures
Semi-supervised Medical Image Segmentation through Dual-task Consistency
Deep learning-based semi-supervised learning (SSL) algorithms have led to
promising results in medical image segmentation and can alleviate the cost of
doctors' expensive annotations by leveraging unlabeled data. However, most
existing SSL algorithms in the literature tend to regularize model training by
perturbing networks and/or data. Observing that multi/dual-task learning
attends to various levels of information which have inherent prediction
perturbation, we ask the question in this work: can we explicitly build
task-level regularization rather than implicitly constructing networks- and/or
data-level perturbation-and-transformation for SSL? To answer this question, we
propose a novel dual-task-consistency semi-supervised framework for the first
time. Concretely, we use a dual-task deep network that jointly predicts a
pixel-wise segmentation map and a geometry-aware level set representation of
the target. The level set representation is converted to an approximated
segmentation map through a differentiable task transform layer. Simultaneously,
we introduce a dual-task consistency regularization between the level
set-derived segmentation maps and directly predicted segmentation maps for both
labeled and unlabeled data. Extensive experiments on two public datasets show
that our method can largely improve the performance by incorporating the
unlabeled data. Meanwhile, our framework outperforms the state-of-the-art
semi-supervised medical image segmentation methods. Code is available at:
https://github.com/Luoxd1996/DTC
Comment: 9 pages, 4 figures
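The differentiable task-transform step can be illustrated with a smooth Heaviside function mapping a signed distance map to a soft segmentation, and the dual-task consistency as an MSE between the two segmentation outputs. The steepness `k`, the sign convention (negative inside the object), and the MSE form are illustrative assumptions, not the exact choices of the paper.

```python
import numpy as np

def level_set_to_seg(sdm, k=1500.0):
    """Smooth Heaviside: map a signed distance map (negative inside
    the object) to a soft segmentation in [0, 1]."""
    z = np.clip(k * sdm, -60.0, 60.0)   # clip to avoid overflow in exp
    return 1.0 / (1.0 + np.exp(z))      # sigmoid(-k * sdm)

def dual_task_consistency(pred_seg, pred_sdm):
    """MSE between the directly predicted segmentation and the
    segmentation derived from the predicted level set; applicable to
    both labeled and unlabeled images since no ground truth is needed."""
    return np.mean((pred_seg - level_set_to_seg(pred_sdm)) ** 2)
```

Because the transform is differentiable, the consistency term can backpropagate into both decoder heads during training.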
Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks
Despite the state-of-the-art performance for medical image segmentation, deep
convolutional neural networks (CNNs) have rarely provided uncertainty
estimations regarding their segmentation outputs, e.g., model (epistemic) and
image-based (aleatoric) uncertainties. In this work, we analyze these different
types of uncertainties for CNN-based 2D and 3D medical image segmentation
tasks. We additionally propose a test-time augmentation-based aleatoric
uncertainty to analyze the effect of different transformations of the input
image on the segmentation output. Test-time augmentation has previously been
used to improve segmentation accuracy, yet has not been formulated in a consistent
mathematical framework. Hence, we also propose a theoretical formulation of
test-time augmentation, where a distribution of the prediction is estimated by
Monte Carlo simulation with prior distributions of parameters in an image
acquisition model that involves image transformations and noise. We compare and
combine our proposed aleatoric uncertainty with model uncertainty. Experiments
with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic
Resonance Images (MRI) showed that 1) the test-time augmentation-based
aleatoric uncertainty provides a better uncertainty estimation than calculating
the test-time dropout-based model uncertainty alone and helps to reduce
overconfident incorrect predictions, and 2) our test-time augmentation
outperforms a single-prediction baseline and dropout-based multiple
predictions.
Comment: 13 pages, 8 figures, accepted by Neurocomputing
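The Monte Carlo formulation can be sketched as follows, with a deliberately simplified acquisition model (random horizontal flip plus additive Gaussian noise; the full model also covers other spatial transformations) and a hypothetical `predict` callable standing in for the trained CNN.

```python
import numpy as np

def tta_uncertainty(image, predict, n_samples=20, noise_std=0.05, rng=None):
    """Monte Carlo test-time augmentation: sample a transform, predict,
    undo the spatial part of the transform, and summarize the resulting
    prediction distribution."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_samples):
        flip = rng.integers(2) == 1
        aug = image[:, ::-1] if flip else image          # spatial transform
        aug = aug + rng.normal(0.0, noise_std, image.shape)  # acquisition noise
        p = predict(aug)
        preds.append(p[:, ::-1] if flip else p)          # inverse transform
    preds = np.stack(preds)
    # voxel-wise variance serves as the aleatoric uncertainty estimate
    return preds.mean(axis=0), preds.var(axis=0)
```

High-variance voxels flag predictions that are sensitive to plausible perturbations of the input, which is where overconfident errors tend to occur.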
MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset
Pretraining with large-scale 3D volumes has the potential to improve the
segmentation performance on a target medical image dataset where the training
images and annotations are limited. Due to the high cost of acquiring
pixel-level segmentation annotations on the large-scale pretraining dataset,
pretraining with unannotated images is highly desirable. In this work, we
propose a novel self-supervised learning strategy named Volume Fusion (VF) for
pretraining 3D segmentation models. It fuses several random patches from a
foreground sub-volume to a background sub-volume based on a predefined set of
discrete fusion coefficients, and forces the model to predict the fusion
coefficient of each voxel, which is formulated as a self-supervised
segmentation task without manual annotations. Additionally, we propose a novel
network architecture based on parallel convolution and transformer blocks that
is suitable to be transferred to different downstream segmentation tasks with
various scales of organs and lesions. The proposed model was pretrained with
110k unannotated 3D CT volumes, and experiments with different downstream
segmentation targets, including head and neck organs and thoracic/abdominal organs,
showed that our pretrained model largely outperformed training from scratch and
several state-of-the-art self-supervised training methods and segmentation
models. The code and pretrained model are available at
https://github.com/openmedlab/MIS-FM.
Comment: 13 pages, 8 figures
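The Volume Fusion pretext task can be sketched as follows. The patch count, patch size, and set of discrete fusion coefficients are illustrative assumptions, and `foreground`/`background` stand for two sub-volumes cropped from unannotated CT scans.

```python
import numpy as np

def volume_fusion_sample(foreground, background, n_patches=5,
                         patch_size=(8, 8, 8), levels=(0.0, 0.5, 1.0),
                         rng=None):
    """Paste randomly placed foreground patches onto the background with a
    discrete fusion coefficient; the coefficient index at each voxel becomes
    the label of a self-supervised segmentation task (class 0 = unfused)."""
    rng = np.random.default_rng(rng)
    fused = background.copy()
    label = np.zeros(background.shape, dtype=np.int64)
    for _ in range(n_patches):
        k = rng.integers(1, len(levels))      # pick a non-zero coefficient
        alpha = levels[k]
        start = [rng.integers(0, s - p + 1)
                 for s, p in zip(background.shape, patch_size)]
        sl = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
        fused[sl] = alpha * foreground[sl] + (1 - alpha) * background[sl]
        label[sl] = k
    return fused, label
```

A 3D segmentation model is then trained to predict `label` from `fused`, which requires no manual annotation and transfers its weights to downstream segmentation tasks.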